Kermeta Programming Language

In my recent adventures researching modeling languages, I came across a language called Kermeta that hasn't previously been mentioned on Lambda-the-Ultimate.org. From the reference manual:

Kermeta is a Domain Specific Language dedicated to metamodel engineering. It fills the gap left by MOF, which defines only the structure of metamodels, by adding a way to specify static semantics (similar to OCL) and dynamic semantics (using an operational semantics in the operations of the metamodel). Kermeta uses the object-oriented paradigm, like Java or Eiffel.
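
To make the quoted distinction concrete, here is a loose analogy in OCaml rather than Kermeta syntax (all names below are invented): MOF alone gives you only the structural part, roughly the type definitions, while Kermeta is about attaching OCL-like invariants and executable operation bodies directly to the metamodel.

    (* Structure only (the MOF-level part of the model). *)
    type state = { sname : string; mutable active : bool }
    type state_machine = { states : state list; mutable current : state }

    (* "Dynamic semantics": behaviour attached to the metamodel as an operation. *)
    let fire (sm : state_machine) (target : state) =
      sm.current.active <- false;
      target.active <- true;
      sm.current <- target

    (* "Static semantics" (an OCL-style invariant): exactly one state is active. *)
    let invariant (sm : state_machine) =
      List.length (List.filter (fun s -> s.active) sm.states) = 1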

There is a list of published papers related to Kermeta here.

Relating Complexity and Precision in Control Flow Analysis

Relating Complexity and Precision in Control Flow Analysis, David Van Horn and Harry Mairson. ICFP 2007.

We analyze the computational complexity of kCFA, a hierarchy of control flow analyses that determine which functions may be applied at a given call-site. This hierarchy specifies related decision problems, quite apart from any algorithms that may implement their solutions. We identify a simple decision problem answered by this analysis and prove that in the 0CFA case, the problem is complete for polynomial time. The proof is based on a nonstandard, symmetric implementation of Boolean logic within multiplicative linear logic (MLL). We also identify a simpler version of 0CFA related to eta-expansion, and prove that it is complete for logarithmic space, using arguments based on computing paths and permutations.

For any fixed k > 0, it is known that kCFA (and the analogous decision problem) can be computed in time exponential in the program size. For k = 1, we show that the decision problem is NP-hard, and sketch why this remains true for larger fixed values of k. The proof technique depends on using the approximation of CFA as an essentially nondeterministic computing mechanism, as distinct from the exactness of normalization. When k = n, so that the "depth" of the control flow analysis grows linearly in the program length, we show that the decision problem is complete for exponential time.

In addition, we sketch how the analysis presented here may be extended naturally to languages with control operators. All of the insights presented give clear examples of how straightforward observations about linearity, and linear logic, may in turn be used to give a greater understanding of functional programming and program analysis.

There's a ton of really good stuff in here; I was particularly fascinated by the fact that 0CFA is exact for multiplicatively linear programs (i.e., those that use each variable at most once), because linearity guarantees that every lambda can flow to at most one use site.
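
A tiny, hand-made OCaml illustration of that point (not taken from the paper): when an abstraction is applied at more than one call site, 0CFA merges everything that flows into its parameter, while in a linear program each abstraction has a single use and there is nothing to merge.

    (* Non-linear: id is applied twice, so 0CFA records that x may be bound to
       either argument lambda, and both lambdas "flow" out of both call sites. *)
    let id = fun x -> x
    let f = id (fun y -> y + 1)
    let g = id (fun z -> z * 2)

    (* Linear: every variable (and hence every lambda) is used at most once, so
       each abstract flow fact has exactly one concrete counterpart: 0CFA is exact. *)
    let id_f = fun x -> x
    let id_g = fun x -> x
    let f' = id_f (fun y -> y + 1)
    let g' = id_g (fun z -> z * 2)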

OCaml Summer Project

I am pleased to announce the second OCaml Summer Project! The OSP is aimed at encouraging growth in the OCaml community by funding students over the summer to work on open-source projects in OCaml. We'll fund up to three months of work, and at the end of the summer, we will fly the participants out for a meeting in New York, where people will present their projects and get a chance to meet with other members of the OCaml community.

The project is being funded and run by Jane Street Capital. Jane Street makes extensive use of OCaml, and we are excited about the idea of encouraging and growing the OCaml community.

Our goal this year is to get both faculty and students involved. To that end, we will require joint applications from the student or students who will be working on the project, and from a faculty member who both recommends the students and will mentor them throughout the project. Each student will receive a grant of $5k/month over the course of the project, and each faculty member will receive $2k/month. We expect students to treat this as a full-time commitment, and professors to spend the equivalent of one day a week on the project.

We will also award a prize for what we deem to be the most successful project. Special consideration will be given to projects that display real polish in the form of good documentation, robust build systems, and effective test suites. We'll announce more details about the prize farther down the line.

If you'd like to learn more about the OSP and how to apply, you can look at our website here:

http://ocamlsummerproject.com

Please direct any questions or suggestions you have to osp@janestcapital.com. Also, this might be a nice place for people to leave feedback about the program.

(if one of the editors thought this was appropriate to move to the front page, I would be appreciative. I think it's something that would be of interest to a large part of LtU's readership.)

Arc is released

Make of it what you will, but Arc is now officially released.

This part of Graham's announcement is a gem:

I worry about releasing it, because I don't want there to be forces pushing the language to stop changing. Once you release something and people start to build stuff on top of it, you start to feel you shouldn't change things. So we're giving notice in advance that we're going to keep acting as if we were the only users. We'll change stuff without thinking about what it might break, and we won't even keep track of the changes.

I realize that sounds harsh, but there's a lot at stake. I went to a talk last summer by Guido van Rossum about Python, and he seemed to have spent most of the preceding year switching from one representation of characters to another. I never want to blow a year dealing with characters. Why did Guido have to? Because he had to think about compatibility. But though it seems benevolent to worry about breaking existing code, ultimately there's a cost: it means you spend a year dealing with character sets instead of making the language more powerful.

This sure made me smile...

The YNot Project

The YNot Project

The goal of the Ynot project is to make programming with dependent types practical for a modern programming language. In particular, we are extending the Coq proof assistant to make it possible to write higher-order, imperative and concurrent programs (in the style of Haskell) through a shallow embedding of Hoare Type Theory (HTT). HTT provides a clean separation between pure and effectful computations, makes it possible to formally specify and reason about effects, and is fully compositional.
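
Schematically, and using the standard HTT notation rather than Ynot's concrete Coq syntax, effectful computations inhabit Hoare types of the form

    {P} x : A {Q}

read as: started in any heap satisfying the precondition P, the computation (if it terminates) returns a value x of type A in a heap satisfying the postcondition Q. Pure code keeps its ordinary types, which is the "clean separation between pure and effectful computations" mentioned above.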

This was actually commented on here, by Greg Morrisett himself. Conceptually, this seems like it's related to Adam Chlipala's A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language. See, in particular, slides 23-24 of this presentation (PDF). More generally, computation and reflection seem to be gaining recognition as important features for the practical use of such powerful tools as Coq; see also SSREFLECT tactics for Coq and their accompanying paper A Small Scale Reflection Extension for the Coq system (PDF).

It's also interesting to observe that the work appears to depend on the "Program" keyword in the forthcoming Coq 8.2, the machinery behind which is known as "Russell," as referred to in this thread. In the meantime, Russell's main page is here. Again, the point is to simplify programming with dependent types in Coq.

Update: Some preliminary code is available from the project page.

A Model for Formal Parametric Polymorphism: A PER Interpretation for System R

A Model for Formal Parametric Polymorphism: A PER Interpretation for System R, Roberto Bellucci, Martin Abadi, Pierre-Louis Curien. TLCA 1995.

System R is an extension of system F that formalizes Reynolds' notion of relational parametricity. In system R, considerably more lambda-terms can be proved equal than in system F: for example, the encoded weak products of F are strong products in R. Also, many "theorems for free" à la Wadler can be proved formally in R. In this paper we describe a semantics for system R. As a first step, we give a precise and general reconstruction of the per model of system F of Bainbridge et al., presenting it as a categorical model in the sense of Seely. Then we interpret System R in this model.

System R is a logic for proving relational parametricity results. It's similar in some ways to Abadi-Plotkin logic, which we have linked to previously on LtU.
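
As a reminder of what "theorems for free" means here, the relational reading of the polymorphic identity type is roughly

    f : ∀α. α → α
    for all types A, B, every relation R ⊆ A × B, and every (a, b) ∈ R,
    (f A a, f B b) ∈ R

from which it follows that f must behave like the identity. The point of System R is that statements of this shape become provable inside the calculus rather than remaining meta-level arguments.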

Really un-mutable Scheme

Didn't notice this being mentioned on LtU yet: PLT is going really, really un-mutable, which seems like a rather cool and worthy experiment to my mind, and one that will probably highlight some of the more pragmatic and cultural aspects of managing a language's development in light of its user community.

For PLT Scheme v4.0, we're going to try it. In our main dialects of Scheme (such as the mzscheme language), cons will create immutable pairs, and pair? and list? will recognize only immutable pairs and lists. The set-car! and set-cdr! procedures will not exist. A new set of procedures mcons, mcar, mcdr, set-mcar!, and set-mcdr! will support mutable pairs.
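
For readers who don't use PLT Scheme, the resulting split is essentially the one OCaml programmers already live with; a rough OCaml rendering (the mpair type below is ours, not a real PLT or OCaml API):

    (* Ordinary pairs and lists are immutable, like the new cons/pair?/list?;
       mutation requires an explicitly mutable type, like mcons and friends. *)
    type ('a, 'b) mpair = { mutable mcar : 'a; mutable mcdr : 'b }

    let p = (1, 2)                  (* immutable, like (cons 1 2) in v4.0 *)
    let m = { mcar = 1; mcdr = 2 }  (* like (mcons 1 2) *)
    let () = m.mcar <- 42           (* like (set-mcar! m 42) *)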

WaveScript

WaveScope is a system for developing distributed, high-rate applications that need to process streams of data from various sources (e.g., sensors) using a combination of signal processing and database (event stream processing) operations. The execution environment for these applications ranges from embedded sensor nodes to multicore/multiprocessor servers.


WaveScript is the programming language used to develop WaveScope applications. It is a high-level, functional, stream-processing language that aims to deliver uncompromising performance. WaveScript programs execute in parallel on multiple cores, or distributed across a network. Its compiler uses aggressive partial evaluation techniques to remove abstractions and reduce the source program to a graph of stream operators.
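
To give a rough picture of what a "graph of stream operators" looks like, here is a plain OCaml pipeline over lazy sequences; this is only an illustration of the idea, not WaveScript syntax (and it assumes OCaml >= 4.14 for Seq.ints and Seq.take).

    (* A two-stage pipeline: a signal-processing stage feeding an
       event-detection stage, each an ordinary function on streams. *)
    let source = Seq.ints 0                                 (* stand-in for sensor samples *)
    let scaled = Seq.map (fun x -> 2 * x) source            (* DSP-style stage *)
    let events = Seq.filter (fun x -> x mod 10 = 0) scaled  (* event-stream stage *)

    let () = Seq.iter (Printf.printf "%d\n") (Seq.take 5 events)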

This came up in the discussion group, and since it is cool (both the project and the language) and the other editors are mostly MIA, I thought I'd bring it to the attention of those who only follow the home page.

To get a taste of the language click here.

Recycling Continuations

Recycling Continuations, Jonathan Sobel and Daniel P. Friedman. ICFP 1998.

If the continuations in functional data-structure-generating programs are made explicit and represented as records, they can be "recycled." Once they have served their purpose as temporary, intermediate structures for managing program control, the space they occupy can be reused for the structures that the programs produce as their output. To effect this immediate memory reclamation, we use a sequence of correctness-preserving program transformations, demonstrated through a series of simple examples. We then apply the transformations to general anamorphism operators, with the important consequence that all finite-output anamorphisms can now be run without any stack- or continuation-space overhead.

This is a fun paper, and exactly the kind of thing I find addictive: it uses some elegant theory to formalize and systematize a clever hackerly trick.
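
Here, in OCaml, is a sketch of where the transformations end up for list-producing unfolds; this is not the paper's derivation or its Scheme code, just the destination-passing flavor of the result, with our own names throughout.

    (* The pending "fill in the tail of this cell" continuation is represented
       by the output cell itself, so the unfold runs in constant stack space. *)
    type 'a cell = Nil | Cons of { head : 'a; mutable tail : 'a cell }

    let unfold (step : 's -> ('a * 's) option) (seed : 's) : 'a cell =
      match step seed with
      | None -> Nil
      | Some (x, seed) ->
          let first = Cons { head = x; tail = Nil } in
          (* [last] is the most recently produced cell; its tail field plays
             the role of the recycled continuation record. *)
          let rec loop last seed =
            match step seed with
            | None -> ()
            | Some (x, seed) ->
                let next = Cons { head = x; tail = Nil } in
                (match last with Cons r -> r.tail <- next | Nil -> ());
                loop next seed
          in
          loop first seed;
          first

    (* Example: the cells 0, 1, ..., 9. *)
    let digits = unfold (fun n -> if n < 10 then Some (n, n + 1) else None) 0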

The Design and Implementation of Typed Scheme

Tobin-Hochstadt, Felleisen. The Design and Implementation of Typed Scheme. POPL 2008.

When scripts in untyped languages grow into large programs, maintaining them becomes difficult. A lack of types in typical scripting languages means that programmers must (re)discover critical pieces of design information every time they wish to change a program. This analysis step both slows down the maintenance process and may even introduce mistakes due to the violation of undiscovered invariants. This paper presents Typed Scheme, an explicitly typed extension of an untyped scripting language. Its type system is based on the novel notion of occurrence typing, which we formalize and mechanically prove sound. The implementation of Typed Scheme additionally borrows elements from a range of approaches, including recursive types, true unions and subtyping, plus polymorphism combined with a modicum of local inference. Initial experiments with the implementation suggest that Typed Scheme naturally accommodates the programming style of the underlying scripting language, at least for the first few thousand lines of ported code.

The key feature of occurrence typing is the ability of the type system to assign distinct types to distinct occurrences of a variable based on control flow criteria. You can think of it as a generalization of the way pattern matching on algebraic datatypes works.
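
In OCaml terms (not Typed Scheme), the ADT version of that narrowing looks like the function below; occurrence typing recovers the same per-branch refinement from ordinary predicate tests such as number? and string?, without requiring an explicit sum type up front.

    type num_or_str = Num of int | Str of string

    let size v =
      match v with
      | Num n -> n + 1             (* in this branch the payload is an int *)
      | Str s -> String.length s   (* and here it is a string *)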

Go to sections 4.4 and 7 to whet your appetite...

Where are all the editors? LtU needs your TLC...